We introduce ProtoPool, an interpretable image classification model with a pool of prototypes shared by the classes. The training is more straightforward than in existing methods because it does not require a pruning stage. This is achieved by introducing a fully differentiable assignment of prototypes to particular classes. Moreover, we introduce a novel focal similarity function that focuses the model on rare foreground features. We show that ProtoPool obtains state-of-the-art accuracy on the CUB-200-2011 and Stanford Cars datasets, while substantially reducing the number of prototypes. We provide a theoretical analysis of the method and a user study to show that our prototypes are more distinctive than those obtained with competing approaches.
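To make the focal similarity idea concrete, here is a minimal sketch of a focal-style aggregation over a prototype activation map. The tensor shapes and the exact max-minus-mean form are assumptions for illustration, not the paper's verbatim implementation:

```python
import torch

def focal_similarity(activation_map: torch.Tensor) -> torch.Tensor:
    """Aggregate a prototype activation map into a single score.

    Instead of plain max-pooling, the score is the gap between the
    strongest local activation and the mean activation, which rewards
    prototypes that fire on a small, salient (often foreground) region.

    activation_map: (batch, num_prototypes, H, W) similarity scores.
    Returns: (batch, num_prototypes) focal scores.
    """
    flat = activation_map.flatten(start_dim=2)        # (B, P, H*W)
    return flat.max(dim=2).values - flat.mean(dim=2)  # assumed max - mean form
```

A prototype that activates uniformly over the whole image scores near zero under this aggregation, while one that fires on a compact region scores high, which is the intended focusing effect.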
Recently introduced self-supervised methods for image representation learning provide results on par with, or superior to, their fully supervised competitors, yet the corresponding efforts to explain self-supervised approaches lag behind. Motivated by this observation, we introduce a novel visual probing framework for explaining self-supervised models by leveraging probing tasks employed previously in natural language processing. The probing tasks require knowledge about semantic relationships between image parts. Hence, we propose a systematic approach to obtain analogs of natural language in vision, such as visual words, context, and taxonomy. Our proposal is grounded in Marr's computational theory of vision and concerns features like textures, shapes, and lines. We show the effectiveness and applicability of these analogs in the context of explaining self-supervised representations. Our key findings emphasize that relations between language and vision can serve as an effective yet intuitive tool for discovering how machine learning models work, independently of data modality. Our work opens a plethora of research pathways towards more explainable and transparent AI.
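For readers unfamiliar with probing, a generic sketch is below: a simple classifier is trained on frozen representations to predict some property; its accuracy measures how much of that property the representation encodes. The placeholder features and labels are assumptions, and this is the generic probing pattern, not the paper's specific visual tasks:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# features: frozen self-supervised embeddings, one row per image (or part)
# labels:   the probed property, e.g. an assumed "visual word" cluster id
rng = np.random.default_rng(0)
features = rng.normal(size=(1000, 128))   # placeholder embeddings
labels = rng.integers(0, 10, size=1000)   # placeholder probed property

# a linear probe: high held-out accuracy suggests the property is
# linearly decodable from the frozen representation
probe = LogisticRegression(max_iter=1000).fit(features[:800], labels[:800])
print("probe accuracy:", probe.score(features[800:], labels[800:]))
```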
The celebrated FedAvg algorithm of McMahan et al. (2017) is based on three components: client sampling (CS), data sampling (DS) and local training (LT). While the first two are reasonably well understood, the third component, whose role is to reduce the number of communication rounds needed to train the model, resisted all attempts at a satisfactory theoretical explanation. Malinovsky et al. (2022) identified four distinct generations of LT methods based on the quality of the provided theoretical communication complexity guarantees. Despite a lot of progress in this area, none of the existing works were able to show that it is theoretically better to employ multiple local gradient-type steps (i.e., to engage in LT) than to rely on a single local gradient-type step only in the important heterogeneous data regime. In a recent breakthrough embodied in their ProxSkip method and its theoretical analysis, Mishchenko et al. (2022) showed that LT indeed leads to provable communication acceleration for arbitrarily heterogeneous data, thus jump-starting the $5^{\rm th}$ generation of LT methods. However, while these latest generation LT methods are compatible with DS, none of them support CS. We resolve this open problem in the affirmative. In order to do so, we had to base our algorithmic development on new algorithmic and theoretical foundations.
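A minimal sketch of one FedAvg-style communication round may help fix the three components. All names and interfaces here (e.g. `client.sample_batch`) are illustrative assumptions, not code from the cited papers:

```python
import copy
import random
import torch

def fedavg_round(global_model, clients, cohort_size, local_steps, lr, batch_size):
    """One communication round illustrating the three components:
    CS -- only a random cohort of clients participates,
    DS -- each local step uses a minibatch of the client's data,
    LT -- several local gradient steps before averaging.
    """
    cohort = random.sample(clients, cohort_size)        # client sampling (CS)
    updates = []
    for client in cohort:
        model = copy.deepcopy(global_model)
        opt = torch.optim.SGD(model.parameters(), lr=lr)
        for _ in range(local_steps):                    # local training (LT)
            x, y = client.sample_batch(batch_size)      # data sampling (DS)
            opt.zero_grad()
            torch.nn.functional.cross_entropy(model(x), y).backward()
            opt.step()
        updates.append(model.state_dict())
    # server averages the cohort's parameters
    avg = {k: torch.stack([u[k] for u in updates]).float().mean(0)
           for k in updates[0]}
    global_model.load_state_dict(avg)
```

The open problem discussed above concerns proving communication acceleration for methods of this shape when `cohort_size` is smaller than the number of clients, i.e. when CS is active alongside LT.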
We leverage probabilistic models of neural representations to investigate how residual networks fit classes. To this end, we estimate class-conditional density models for representations learned by deep ResNets. We then use these models to characterize distributions of representations across learned classes. Surprisingly, we find that classes in the investigated models are not fitted in a uniform way. On the contrary: we uncover two groups of classes that are fitted with markedly different distributions of representations. These distinct modes of class-fitting are evident only in the deeper layers of the investigated models, indicating that they are not related to low-level image features. We show that the uncovered structure in neural representations correlates with memorization of training examples and adversarial robustness. Finally, we compare class-conditional distributions of neural representations between memorized and typical examples. This allows us to uncover where in the network structure class labels arise for memorized and standard inputs.
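A minimal sketch of the class-conditional density idea, assuming Gaussian density models over penultimate-layer features; the choice of a Gaussian and the regularization constant are assumptions, not necessarily the paper's exact model family:

```python
import numpy as np
from scipy.stats import multivariate_normal

def fit_class_conditional_gaussians(reps, labels):
    """Fit one Gaussian per class to layer representations.

    reps:   (N, D) array of representations from a chosen layer.
    labels: (N,) integer class labels.
    Returns {class_id: frozen scipy multivariate normal}.
    """
    models = {}
    for c in np.unique(labels):
        x = reps[labels == c]
        # small diagonal term keeps the covariance well-conditioned
        mean = x.mean(axis=0)
        cov = np.cov(x, rowvar=False) + 1e-4 * np.eye(x.shape[1])
        models[c] = multivariate_normal(mean=mean, cov=cov)
    return models

# per-class log-likelihoods, models[c].logpdf(reps), can then be compared
# across layers or between memorized and typical training examples
```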
Hierarchical decomposition of control is unavoidable in large dynamical systems. In reinforcement learning (RL), it is usually realized with subgoals defined at higher policy levels and achieved at lower policy levels. Reaching these goals can take a substantial amount of time, during which it is not verified whether they are still worth pursuing. However, due to the randomness of the environment, these goals may become obsolete. In this paper, we address this gap in state-of-the-art approaches and propose a method in which the validity of higher-level actions (thus lower-level goals) is constantly verified at the higher level. If the actions, i.e., lower-level goals, become inadequate, they are replaced by more appropriate ones. This way we combine the advantage of hierarchical RL, namely fast training, with the advantage of flat RL, namely immediate reactivity. We study our approach experimentally on seven benchmark environments.
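The control loop below sketches the verification idea: after every low-level step, the higher level re-scores its current subgoal and replaces it if it has become obsolete. The policy interfaces (`select_goal`, `is_still_valid`, `act`) are assumed names for illustration, not the paper's API:

```python
def run_episode(env, high_policy, low_policy, horizon):
    """Hierarchical control with continual subgoal verification."""
    obs = env.reset()
    goal = high_policy.select_goal(obs)
    for _ in range(horizon):
        action = low_policy.act(obs, goal)          # low level pursues the goal
        obs, reward, done, _ = env.step(action)
        if done:
            break
        # verification step: is the current goal still worth pursuing?
        if not high_policy.is_still_valid(obs, goal):
            goal = high_policy.select_goal(obs)     # replace the obsolete goal
    return obs
```

In plain hierarchical RL the inner check is absent and the goal is pursued until reached or timed out; in flat RL the "goal" is re-decided every step. The sketch sits between the two, which is the claimed combination of fast training and immediate reactivity.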
The ability of continual learning systems to transfer knowledge from previously seen tasks in order to maximize performance on new tasks is a significant challenge for the field, limiting the applicability of continual learning solutions to realistic scenarios. Consequently, this study aims to broaden our understanding of transfer and its driving forces in the specific case of continual reinforcement learning. We adopt SAC as the underlying RL algorithm and Continual World as a suite of continuous control tasks. We systematically study how different components of SAC (the actor and the critic, exploration, and data) affect transfer efficacy, and we provide recommendations regarding various modeling options. The best set of choices, dubbed ClonEx-SAC, is evaluated on the recent Continual World benchmark. ClonEx-SAC obtains an 87% final success rate, compared to 80% for PackNet, the best method in the benchmark. Moreover, transfer grows from 0.18 to 0.54 according to the metric provided by Continual World.
Sketch-and-project is a framework that unifies many known iterative methods for solving linear systems, together with their variants, and further extends to non-linear optimization problems. It includes popular methods such as randomized Kaczmarz, coordinate descent, variants of the Newton method in convex optimization, and others. In this paper, we obtain sharp guarantees for the convergence rate of sketch-and-project methods via new tight spectral bounds for the expected sketched projection matrix. Our estimates reveal a connection between the convergence rate of sketch-and-project and the approximation error of another well-known but seemingly unrelated family of algorithms, which use sketching to accelerate popular matrix factorizations such as QR and SVD. This connection brings us closer to precisely quantifying how the performance of sketch-and-project solvers depends on their sketch size. Our analysis covers not only Gaussian and sub-Gaussian sketching matrices, but also a family of efficient sparse sketching methods known as LESS embeddings. Our experiments back up the theory and demonstrate that even extremely sparse sketches exhibit the same convergence properties in practice.
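For concreteness, one sketch-and-project iteration for a linear system Ax = b (in the Euclidean geometry, i.e. the identity-matrix case of the general framework) projects the current iterate onto the sketched constraint set {x : Sᵀ A x = Sᵀ b}. A minimal numpy sketch, using a Gaussian sketching matrix for simplicity (a sparse, LESS-style sketch would simply replace S):

```python
import numpy as np

def sketch_and_project_step(x, A, b, sketch_size, rng):
    """One sketch-and-project iteration for A x = b:

        x_{k+1} = x_k - A^T S (S^T A A^T S)^+ S^T (A x_k - b)

    with a fresh sketch S each step. Randomized Kaczmarz is recovered
    when S selects a single row of A.
    """
    m = A.shape[0]
    S = rng.normal(size=(m, sketch_size))   # Gaussian sketching matrix
    SA = S.T @ A                            # sketched system matrix
    residual = S.T @ (A @ x - b)            # sketched residual
    return x - SA.T @ np.linalg.pinv(SA @ SA.T) @ residual
```

The spectral bounds discussed above govern how fast the expected error contracts per iteration as a function of `sketch_size`.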
In robot-assisted therapy for individuals with autism spectrum disorder, the workload of the therapist during a session increases if the robot has to be controlled manually. To allow the therapist to focus on the interaction with the person, the robot should be more autonomous: it should be able to interpret the person's state and continuously adapt its actions according to their behavior. In this paper, we develop a personalized robot behavior model that can be used in the robot's decision-making process during an activity. This behavior model is trained with the help of a user model learned from real interaction data. We use Q-learning for this task, and the results show that the policy requires about 10,000 iterations to converge. We therefore investigate policy transfer to improve the convergence speed; we show that this is a feasible solution, but an inappropriate initial policy can lead to a suboptimal final return.
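A minimal tabular Q-learning sketch, with a `q_init` argument standing in for the policy-transfer idea (warm-starting the table from a policy learned elsewhere). The environment interface and all hyperparameters are illustrative assumptions:

```python
import numpy as np

def q_learning(env, n_states, n_actions, episodes,
               alpha=0.1, gamma=0.99, eps=0.1, q_init=None):
    """Tabular Q-learning; q_init transfers a previously learned table."""
    Q = np.zeros((n_states, n_actions)) if q_init is None else q_init.copy()
    rng = np.random.default_rng(0)
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            # epsilon-greedy action selection
            a = rng.integers(n_actions) if rng.random() < eps else int(Q[s].argmax())
            s_next, r, done = env.step(a)   # assumed (state, reward, done) API
            # standard Q-learning temporal-difference update
            Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
            s = s_next
    return Q
```

The trade-off reported above corresponds to the choice of `q_init`: a well-matched initialization cuts the iterations to convergence, while a poorly matched one can bias the policy toward a worse final return.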
We present a novel approach to probabilistic electricity price forecasting (EPF) that utilizes distributional artificial neural networks. The novel network structure for EPF is based on a regularized distributional multilayer perceptron (DMLP) that contains a probability layer. Using the TensorFlow Probability framework, the output of the neural network is defined as a distribution, either a normal distribution or the potentially skewed and heavy-tailed Johnson's SU (JSU) distribution. The method is compared against state-of-the-art benchmarks in a forecasting study involving day-ahead electricity prices in the German market. The results reveal evidence for the importance of higher moments when modeling electricity prices.
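A minimal sketch of such a distributional output head in TensorFlow Probability, assuming a small dense trunk and softplus links to keep the tail-weight and scale positive; layer sizes and regularization are omitted assumptions, not the paper's exact architecture:

```python
import tensorflow as tf
import tensorflow_probability as tfp

tfd, tfpl = tfp.distributions, tfp.layers

# DMLP-style regressor: a dense trunk followed by a probability layer
# whose four outputs parameterize a Johnson's SU distribution.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(4),  # skewness, tailweight, loc, scale
    tfpl.DistributionLambda(lambda t: tfd.JohnsonSU(
        skewness=t[..., 0:1],
        tailweight=1e-3 + tf.math.softplus(t[..., 1:2]),  # keep positive
        loc=t[..., 2:3],
        scale=1e-3 + tf.math.softplus(t[..., 3:4]))),
])

# train by maximizing the likelihood of the observed prices:
# the loss is the negative log-probability of y under the predicted JSU
model.compile(loss=lambda y, rv: -rv.log_prob(y), optimizer="adam")
```

Because the JSU has free skewness and tail-weight parameters, the fitted head can express the asymmetry and heavy tails that the study finds important for electricity prices, which a normal output layer cannot.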
We introduce a novel method of internal replay that modulates the frequency of rehearsal based on the depth of the network. While replay strategies mitigate the effects of catastrophic forgetting in neural networks, recent works on generative replay show that performing the rehearsal only in the deeper layers of the network improves performance in continual learning. However, the generative approach introduces additional computational overhead, limiting its applications. Motivated by the observation that the early layers of neural networks forget less abruptly, we propose to update network layers with varying frequency, using intermediate-level features during replay. This reduces the computational burden by omitting both the deeper layers of the generator and the early layers of the main model. We name our method Progressive Latent Replay and show that it outperforms Internal Replay while using significantly fewer resources.
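A small sketch of the depth-dependent scheduling idea: deeper blocks are rehearsed on most steps, earlier blocks exponentially less often. The geometric schedule below is an assumption chosen for illustration; the paper's actual frequency schedule may differ:

```python
def depths_to_rehearse(step: int, num_blocks: int, base_period: int = 2):
    """Return the block depths rehearsed at this training step.

    Deeper blocks get small periods (rehearsed often); earlier blocks
    get exponentially larger ones, since they forget less abruptly.
    """
    return [d for d in range(num_blocks)
            if step % (base_period ** (num_blocks - 1 - d)) == 0]

# e.g. with 4 blocks: depth 3 every step, depth 2 every 2nd step,
# depth 1 every 4th, depth 0 every 8th
print([depths_to_rehearse(s, 4) for s in range(8)])
```

Steps that skip the early depths can start the rehearsal from replayed intermediate features instead of raw inputs, which is where the savings over full internal replay come from.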